
Add: fyaronskiy/english_code_retriever results #296

Merged

Samoed merged 2 commits into embeddings-benchmark:main from fedor28:en_code_retriever on Oct 10, 2025

Conversation

@fedor28 (Contributor) commented Oct 10, 2025

Add results on MTEB(Code, v1), RTEB(Code, beta), CoIR benchmarks

  • My model has a model sheet, report, or similar
  • My model has a reference implementation in mteb/models/; this can be an API. Instructions on how to add a model can be found here
  • The results submitted were obtained using the reference implementation
  • My model is available, either as a publicly accessible API or publicly on e.g. Huggingface
  • I solemnly swear that for all results submitted I have not trained on the evaluation dataset, including training splits. If I have, I have disclosed it clearly.

@github-actions

Model Results Comparison

Reference models: intfloat/multilingual-e5-large, google/gemini-embedding-001
New models evaluated: fyaronskiy/english_code_retriever
Tasks: AppsRetrieval, COIRCodeSearchNetRetrieval, CodeEditSearchRetrieval, CodeFeedbackMT, CodeFeedbackST, CodeSearchNetCCRetrieval, CodeSearchNetRetrieval, CodeTransOceanContest, CodeTransOceanDL, CosQA, DS1000Retrieval, FreshStackRetrieval, HumanEvalRetrieval, MBPPRetrieval, StackOverflowQA, SyntheticText2SQL, WikiSQLRetrieval

Results for fyaronskiy/english_code_retriever

| task_name | fyaronskiy/english_code_retriever | google/gemini-embedding-001 | intfloat/multilingual-e5-large | Max result |
| --- | --- | --- | --- | --- |
| AppsRetrieval | 0.0804 | 0.9375 | 0.3255 | 0.9463 |
| COIRCodeSearchNetRetrieval | 0.7423 | 0.8106 | nan | 0.8951 |
| CodeEditSearchRetrieval | 0.3534 | 0.8161 | 0.5038 | 0.8161 |
| CodeFeedbackMT | 0.4401 | 0.5628 | 0.4278 | 0.9370 |
| CodeFeedbackST | 0.5779 | 0.8533 | 0.7426 | 0.9067 |
| CodeSearchNetCCRetrieval | 0.4271 | 0.8469 | 0.7783 | 0.9635 |
| CodeSearchNetRetrieval | 0.8655 | 0.9133 | 0.8412 | 0.9397 |
| CodeTransOceanContest | 0.6068 | 0.8953 | 0.7403 | 0.9496 |
| CodeTransOceanDL | 0.3516 | 0.3147 | 0.3128 | 0.4419 |
| CosQA | 0.2556 | 0.5024 | 0.3480 | 0.5218 |
| DS1000Retrieval | 0.3242 | 0.6870 | nan | 0.6897 |
| FreshStackRetrieval | 0.1830 | 0.3979 | 0.2519 | 0.4438 |
| HumanEvalRetrieval | 0.7182 | 0.9910 | nan | 0.9945 |
| MBPPRetrieval | 0.7207 | 0.9416 | nan | 0.9416 |
| StackOverflowQA | 0.5653 | 0.9671 | 0.8889 | 0.9717 |
| SyntheticText2SQL | 0.4279 | 0.6996 | 0.5307 | 0.7875 |
| WikiSQLRetrieval | 0.8792 | 0.8814 | nan | 0.9608 |
| Average | 0.5011 | 0.7658 | 0.5576 | 0.8298 |
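The Average row is consistent with a plain unweighted macro-average over the per-task main scores, with nan entries skipped for models that lack results on a task. A minimal sketch reproducing the submitted model's 0.5011 (the helper name `macro_average` is my own, not mteb's):

```python
import math

# Per-task main scores for fyaronskiy/english_code_retriever,
# copied from the comparison table above.
scores = {
    "AppsRetrieval": 0.0804,
    "COIRCodeSearchNetRetrieval": 0.7423,
    "CodeEditSearchRetrieval": 0.3534,
    "CodeFeedbackMT": 0.4401,
    "CodeFeedbackST": 0.5779,
    "CodeSearchNetCCRetrieval": 0.4271,
    "CodeSearchNetRetrieval": 0.8655,
    "CodeTransOceanContest": 0.6068,
    "CodeTransOceanDL": 0.3516,
    "CosQA": 0.2556,
    "DS1000Retrieval": 0.3242,
    "FreshStackRetrieval": 0.1830,
    "HumanEvalRetrieval": 0.7182,
    "MBPPRetrieval": 0.7207,
    "StackOverflowQA": 0.5653,
    "SyntheticText2SQL": 0.4279,
    "WikiSQLRetrieval": 0.8792,
}

def macro_average(values):
    """Unweighted mean over tasks, skipping nan entries for missing results."""
    present = [v for v in values if not math.isnan(v)]
    return sum(present) / len(present)

print(round(macro_average(scores.values()), 4))  # 0.5011, matching the Average row
```

The same nan-skipping rule would explain why intfloat/multilingual-e5-large still gets a 0.5576 average despite missing five tasks.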

```diff
@@ -0,0 +1 @@
+{"name": "fyaronskiy/english_code_retriever", "revision": "be653fab7d27a7348a0c2c3d16b9f92a7f10cb0c", "release_date": null, "languages": [], "n_parameters": null, "memory_usage_mb": null, "max_tokens": null, "embed_dim": null, "license": null, "open_weights": true, "public_training_code": null, "public_training_data": null, "framework": ["Sentence Transformers"], "reference": null, "similarity_fn_name": "cosine", "use_instructions": null, "training_datasets": null, "adapted_from": null, "superseded_by": null, "is_cross_encoder": null, "modalities": ["text"], "loader": null}
\ No newline at end of file
```
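Most fields in this meta record are null, which is what the review comment is about. A quick stdlib-only sketch that parses the submitted line and lists the unfilled fields (the record is copied verbatim from the diff above; the check is illustrative, not mteb's actual validation):

```python
import json

# Model meta record added in this PR, copied verbatim from the one-line diff above.
raw = (
    '{"name": "fyaronskiy/english_code_retriever", '
    '"revision": "be653fab7d27a7348a0c2c3d16b9f92a7f10cb0c", '
    '"release_date": null, "languages": [], "n_parameters": null, '
    '"memory_usage_mb": null, "max_tokens": null, "embed_dim": null, '
    '"license": null, "open_weights": true, "public_training_code": null, '
    '"public_training_data": null, "framework": ["Sentence Transformers"], '
    '"reference": null, "similarity_fn_name": "cosine", "use_instructions": null, '
    '"training_datasets": null, "adapted_from": null, "superseded_by": null, '
    '"is_cross_encoder": null, "modalities": ["text"], "loader": null}'
)

meta = json.loads(raw)
# Fields that are JSON null; empty lists like "languages" count as filled here.
unfilled = sorted(k for k, v in meta.items() if v is None)
print(unfilled)
```

Fields like `embed_dim`, `max_tokens`, and `license` come back unfilled, while `name`, `revision`, and `similarity_fn_name` are set.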
Member

Can you update your model meta with what you've submitted in the PR?

@Samoed merged commit f044c34 into embeddings-benchmark:main on Oct 10, 2025
3 checks passed
